In your final repo, there should be an R Markdown file that organizes all computational steps for evaluating your proposed facial expression recognition framework.
This file is currently a template for running evaluation experiments. You should update it to match your code while keeping precisely the same structure.
Provide the directories for the training data. Training images and training fiducial points are stored in different subfolders.
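The subfolder names below are assumptions for illustration; adjust them to your own layout. The chunks that follow expect the variables train_image_dir and train_pt_dir to be defined, e.g.:
train_dir <- "../data/train_set/" # top-level folder of the training data
train_image_dir <- paste0(train_dir, "images/") # assumed subfolder of .jpg face images
train_pt_dir <- paste0(train_dir, "points/") # assumed subfolder of .mat fiducial points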
In this chunk, we have a set of controls for the evaluation experiments.
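A minimal sketch of such a controls chunk, assuming the flag names used by the chunks below (run.feature.train, run.feature.test, run.gbm, run.test.gbm):
run.feature.train <- TRUE # (re)construct features for the training set
run.feature.test <- TRUE # (re)construct features for the test set
run.gbm <- TRUE # (re)train the baseline GBM model
run.test.gbm <- TRUE # evaluate the baseline GBM on the test set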
Using cross-validation or an independent test set, we compare the performance of models with different specifications. The original starter code tuned the parameter k (number of neighbours) for KNN; below we instead evaluate a baseline GBM and tune the cost parameter for an SVM.
# Train-test split: hold out 20% of the labelled images as an independent test set
info <- read.csv("../data/train_set/label.csv")
n <- nrow(info)
n_train <- round(n*(4/5), 0)
set.seed(0) # fix the seed so the split is reproducible
train_idx <- sample(info$Index, n_train, replace = FALSE)
test_idx <- setdiff(info$Index, train_idx)
If you choose to extract features from the images themselves, for example with a Gabor filter, R will exhaust its memory if all images are read at once. The solution is to read the images in smaller batches (e.g. 100 at a time) and process each batch before reading the next.
n_files <- length(list.files(train_image_dir))
# Read one batch of 100 images; readImage() is from the EBImage package
image_list <- list()
for(i in 1:100){
  image_list[[i]] <- readImage(paste0(train_image_dir, sprintf("%04d", i), ".jpg"))
}
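A minimal sketch of the full batching loop; process_batch() is a hypothetical placeholder for your own feature extractor (e.g. a Gabor filter):
batch_size <- 100
for(start in seq(1, n_files, by = batch_size)){
  idx <- start:min(start + batch_size - 1, n_files)
  batch <- lapply(idx, function(i) readImage(paste0(train_image_dir, sprintf("%04d", i), ".jpg")))
  # process_batch(batch) # hypothetical: extract and save features for this batch
  rm(batch); gc() # release memory before reading the next batch
}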
Fiducial points are stored in MATLAB format (.mat files). In this step, we read them and store them in a list.
# Function to read fiducial points; readMat() is from the R.matlab package
# input: index of the image
# output: matrix of fiducial points corresponding to the index
readMat.matrix <- function(index){
  return(round(readMat(paste0(train_pt_dir, sprintf("%04d", index), ".mat"))[[1]], 0))
}
#load fiducial points
fiducial_pt_list <- lapply(1:n_files, readMat.matrix)
save(fiducial_pt_list, file="../output/fiducial_pt_list.RData")
The following plot shows how pairwise distances between fiducial points can serve as features for facial emotion recognition.
[Figure 1]
feature.R should be the wrapper for all your feature engineering functions and options. The function feature() should have options corresponding to the different scenarios of your project, and should produce an R object containing the features and responses required by all the models you evaluate later.
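The implementation in feature.R is project-specific. As a sketch of the pairwise-distance idea, assuming 78 fiducial points per image, dist() on each coordinate column yields choose(78, 2) = 3003 distances, giving the 6006 features (plus one label column) used below:
pairwise_dist_feature <- function(mat){
  # mat: assumed 78 x 2 matrix of (x, y) fiducial coordinates for one image
  c(as.vector(dist(mat[, 1])), as.vector(dist(mat[, 2]))) # 2 x 3003 = 6006 features
}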
To save time, I skip re-running this chunk once its outputs have been saved; the saved features are loaded below.
source("../lib/feature.R")
tm_feature_train <- NA
if(run.feature.train){
tm_feature_train <- system.time(dat_train <- feature(fiducial_pt_list, train_idx))
save(dat_train, file="../output/feature_train.RData")
saveRDS(tm_feature_train, "../output/tm_feature_train.RDS")
}
tm_feature_test <- NA
if(run.feature.test){
tm_feature_test <- system.time(dat_test <- feature(fiducial_pt_list, test_idx))
save(dat_test, file="../output/feature_test.RData")
saveRDS(tm_feature_test, "../output/tm_feature_test.RDS")
}
load("../output/feature_train.RData")
load("../output/feature_test.RData")
tm_feature_train <- readRDS("../output/tm_feature_train.RDS")
tm_feature_test <- readRDS("../output/tm_feature_test.RDS")
source("../lib/cross_validation_gbm.R")
source("../lib/train_gbm.R")
source("../lib/test_gbm.R")
source("../lib/feature.R")
tm_train_gbm_baseline <- NA
if(run.gbm){
  # Train the baseline GBM model
  tm_train_gbm_baseline <- system.time(gbm.baseline <- train_gbm(train_data = dat_train, s=0.001, K=2, n=50))
  # Save the fitted model and its training time
  saveRDS(gbm.baseline, file="../output/gbm.baseline.RDS")
  saveRDS(tm_train_gbm_baseline, file="../output/tm_train_gbm_baseline.RDS")
}
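train_gbm() lives in ../lib/train_gbm.R. A minimal sketch of what it might wrap, assuming the gbm package and that s, K, and n stand for shrinkage, interaction depth, and number of trees (these parameter meanings are assumptions, not confirmed by the library file):
library(gbm)
train_gbm <- function(train_data, s = 0.001, K = 2, n = 50){
  # assumes train_data$label is coded as a 0/1 response for the bernoulli loss
  gbm(label ~ ., data = train_data, distribution = "bernoulli",
      shrinkage = s, interaction.depth = K, n.trees = n)
}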
tm_train_gbm_baseline <- readRDS("../output/tm_train_gbm_baseline.RDS")
gbm.baseline <- readRDS("../output/gbm.baseline.RDS")
run.test.gbm <- TRUE
tm_test_gbm_baseline <- NA
if(run.test.gbm){
  # column 6007 of dat_test holds the label, so pass only the 6006 distance features
  tm_test_gbm_baseline <- system.time(pred_gbm_baseline <- test_gbm(gbm.fit.model = gbm.baseline, input.test = dat_test[,-6007], n = 50))
  save(pred_gbm_baseline, file="../output/pred_gbm_baseline.RData")
  saveRDS(tm_test_gbm_baseline, file="../output/tm_test_gbm_baseline.RDS")
}
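Similarly, a sketch of what test_gbm() might do (the actual function is in ../lib/test_gbm.R): predict class probabilities with the first n trees and threshold at 0.5.
test_gbm <- function(gbm.fit.model, input.test, n = 50){
  prob <- predict(gbm.fit.model, newdata = input.test, n.trees = n, type = "response")
  as.numeric(prob > 0.5) # hard 0/1 class predictions
}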
accuracy_baseline_gbm <- mean(dat_test$label == pred_gbm_baseline)
pred_gbm_baseline_num <- as.numeric(pred_gbm_baseline)
# WeightedROC() and WeightedAUC() are from the WeightedROC package
tpr.fpr <- WeightedROC(pred_gbm_baseline_num, dat_test$label)
auc <- WeightedAUC(tpr.fpr)
cat("The accuracy of model: GBM baseline is", mean(dat_test$label == pred_gbm_baseline)*100, "%.\n")
## The accuracy of model: GBM baseline is 81.5 %.
cat("The AUC of model: GBM baseline is", auc, ".\n")
## The AUC of model: GBM baseline is 0.5 .
# Confusion Matrix
# library(caret)
#confusionMatrix(dat_test$label, as.factor(pred_gbm_baseline))
Prediction performance matters, but so do the running times for constructing features and for training the model, especially when computational resources are limited.
print(paste("Time for constructing training features=", tm_feature_train[1], "s"))
## [1] "Time for constructing training features= 1.348 s"
print(paste("Time for constructing testing features=", tm_feature_test[1], "s"))
## [1] "Time for constructing testing features= 0.24799999999999 s"
print(paste("Time for training model=", tm_train_gbm_baseline[1], "s"))
## [1] "Time for training model= 68.6119999999992 s"
print(paste("Time for testing model=", tm_test_gbm_baseline[1], "s"))
## [1] "Time for testing model= 17.815 s"
Feature extraction for the SVM model is the same as for the baseline model.
Using cross-validation, we compare the performance of models with different specifications. In the following chunk of code, we tune the parameter cost (the penalty for constraint violations) for the support vector machine (SVM).
source("../lib/SVM_model.R")
# SVM Cross-validation
cost = c(0.00001, 0.0001, 0.001, 0.01, 0.1)
model_labels_svm = paste("SVM with cost =", cost)
model_labels_svm
# err_svm <- matrix(0, nrow = length(cost), ncol = 2)
# for(i in 1:length(cost)){
#   print(paste("cost=", cost[i]))
#   err_svm[i,] <- CV_SVM(dat_train, K=5, cost[i])
#   save(err_svm, file="../output/err_svm.RData")
# }
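CV_SVM() is defined in the sourced library file; a minimal sketch of what it might compute, assuming e1071::svm with a linear kernel and that it returns the mean and standard deviation of the K fold errors (matching the two columns of err_svm below):
library(e1071)
CV_SVM <- function(dat_train, K, cost){
  # assumes dat_train$label is a factor, so svm() performs classification
  fold_idx <- sample(rep(1:K, length.out = nrow(dat_train))) # random fold assignment
  cv_error <- sapply(1:K, function(k){
    fit <- svm(label ~ ., data = dat_train[fold_idx != k, ], kernel = "linear", cost = cost)
    pred <- predict(fit, dat_train[fold_idx == k, ])
    mean(pred != dat_train$label[fold_idx == k]) # fold misclassification rate
  })
  c(mean(cv_error), sd(cv_error))
}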
# Load the saved cross-validation results for the SVM and visualize them
load("../output/err_svm.RData")
err_svm <- as.data.frame(err_svm)
colnames(err_svm) <- c("mean_error", "sd_error")
cost = c(0.00001, 0.0001, 0.001, 0.01, 0.1)
err_svm$cost = as.factor(cost)
err_svm %>% ggplot(aes(x = cost, y = mean_error, ymin = mean_error - sd_error, ymax = mean_error + sd_error)) +
geom_crossbar() +
theme(axis.text.x = element_text(angle = 90, hjust = 1))
Find the best cost for the SVM model, run training and testing, and save the fitted objects as RDS files. To save time, this chunk of code is commented out and the saved objects are loaded below.
# cost_best_svm <- cost[which.min(err_svm[,1])]
# # Training
# tm_train_svm = NA
# tm_train_svm <- system.time(fit_train_svm <- svm(label ~., data = dat_train, kernel = "linear", cost = cost_best_svm) )
# # Testing
# tm_test_svm=NA
# tm_test_svm <- system.time(pred_svm <- predict(fit_train_svm, dat_test))
# #Save and load
# saveRDS(tm_train_svm, "../output/tm_train_svm.RDS")
# saveRDS(tm_test_svm, "../output/tm_test_svm.RDS")
# saveRDS(fit_train_svm, "../output/fit_train_svm.RDS")
# saveRDS(pred_svm, "../output/pred_svm.RDS")
# Load the saved model, predictions, and training/testing times
tm_train_svm <- readRDS("../output/tm_train_svm.RDS")
tm_test_svm <- readRDS("../output/tm_test_svm.RDS")
fit_train_svm <- readRDS("../output/fit_train_svm.RDS")
pred_svm <- readRDS("../output/pred_svm.RDS")
# Evaluation
accu_svm <- mean(dat_test$label == pred_svm)
real_label <- dat_test$label %>% as.character() %>% as.numeric()
pred_value_svm <- pred_svm %>% as.character() %>% as.numeric()
# confusionMatrix() is from the caret package
confusionMatrix(pred_svm, dat_test$label)
## Confusion Matrix and Statistics
##
## Reference
## Prediction 0 1
## 0 477 76
## 1 12 35
##
## Accuracy : 0.8533
## 95% CI : (0.8225, 0.8807)
## No Information Rate : 0.815
## P-Value [Acc > NIR] : 0.007712
##
## Kappa : 0.3742
##
## Mcnemar's Test P-Value : 1.87e-11
##
## Sensitivity : 0.9755
## Specificity : 0.3153
## Pos Pred Value : 0.8626
## Neg Pred Value : 0.7447
## Prevalence : 0.8150
## Detection Rate : 0.7950
## Detection Prevalence : 0.9217
## Balanced Accuracy : 0.6454
##
## 'Positive' Class : 0
##
cost_choose <- cost[which.min(err_svm[,1])]
cat("The accuracy of model: cost =", cost_choose, "is", accu_svm*100, "%.\n")
## The accuracy of model: cost = 0.001 is 85.33333 %.
# The AUC for the SVM model; auc_roc() from the mltools package expects (preds, actuals),
# so the predictions must come first
AUC_SVM <- auc_roc(pred_value_svm, real_label)
AUC_SVM
## [1] 0.6453877
Prediction performance matters, but so do the running times for constructing features and for training the model, especially when computational resources are limited.
Model_performance <- function(time_feature_train, time_feature_test, time_train, time_test){
  cat("Time for constructing training features=", time_feature_train[1], "s \n")
  cat("Time for constructing testing features=", time_feature_test[1], "s \n")
  cat("Time for training model=", time_train[1], "s \n")
  cat("Time for testing model=", time_test[1], "s \n")
}
cat("The accuracy of the SVM model: cost =", cost[which.min(err_svm[,1])], "is", accu_svm*100, "%.\n")
## The accuracy of the SVM model: cost = 0.001 is 85.33333 %.
cat("The auc value for the SVM model is",AUC_SVM * 100, "%.\n")
## The auc value for the SVM model is 80.36243 %.
Model_performance(tm_feature_train, tm_feature_test, tm_train_svm, tm_test_svm)
## Time for constructing training features= 1.348 s
## Time for constructing testing features= 0.248 s
## Time for training model= 96.392 s
## Time for testing model= 8.741 s
### Reference
Du, S., Tao, Y., & Martinez, A. M. (2014). Compound facial expressions of emotion. Proceedings of the National Academy of Sciences, 111(15), E1454-E1462.